perm filename KYOTO[E88,JMC] blob sn#861116 filedate 1988-09-19 generic text, type C, neo UTF8
%kyoto[e88,jmc]		Kyoto prize philosophical lecture
\input memo.tex[let,jmc]
\title{MY PHILOSOPHY OF RESEARCH IN ARTIFICIAL INTELLIGENCE}
%That's a tentative title.

Notes: not for final version.

	It seems to me that the results of my work on AI were
motivated by an uncommon philosophical point of view.
Parts of this point of view are more common today than thirty
years ago, but it still seems worthwhile to describe it
explicitly.

	I have had to reject a very large fraction of my ideas ---
as well as other people's.

	AI is a difficult problem.  It is worthwhile doing a lot
of work to make a little progress, but it must be progress towards
the main goal and not merely a demonstration of some metaphor.
Metaphors are bad.

	One is discouraged and gives up temporarily.  But one
always returns to a problem.

	When I solve a problem, I often can't understand why I
didn't solve it 20 years ago.

	I don't know why so many other people become discouraged
with logic.

Nailing the flag to the mast.

Turing si, Wiener and von Neumann no!
All those things going under the name of cybernetics.
Aug 30

	Being invited to discuss the philosophical basis of one's
research leads to introspection.  The results were interesting to
me, and I hope they will be interesting to some of you.

	The work I want to discuss with you is the approach to
artificial intelligence based on the use of mathematical logic
to formalize common sense knowledge and reasoning.  My other work,
in time-sharing, LISP and mathematical theory of computation,
is more straightforward.  It's just a question of being there
``fustest with the mostest''.

	Using logic in AI involves philosophy in two ways.  One of
them, which I have discussed many times, is that making an
intelligent computer program requires providing it with
data structures into which it can fit particular facts and
beliefs.  Such a framework is, in philosophical tradition, a
metaphysics, an ontology and an epistemology.  I'll return to it
later.

	The other is one's personal philosophy of research.  I first
proposed mathematical logic as the basis of AI in a paper given
in 1958 and published in 1960 and entitled ``Programs with Common
Sense''.  It seems to me that this approach to AI met with immediate
respect, but besides my work, only sporadic efforts were made
to pursue it.  Therefore, because of my inadequacies, progress
was slow.

	If that were all, it would be reasonable to conclude
that it was just one more bad idea.  However, within the last
ten years the approach has been picked up by many people and
now progress is rather rapid.  What I want to discuss and speculate
about is the 20 year delay.  In fact the delay could be considered
as 30 years, since interest in AI began 10 years earlier and the
conceptual tools that I proposed using in 1958 were all available
earlier.


Another start

Philosophy and Artificial Intelligence

	The Inamori Foundation's invitation to emphasize philosophy of
research is especially appropriate for this lecture, because
artificial intelligence is the subject and especially because the
research is based on the use of mathematical logic.  Some of the main
problems of AI research are philosophical.

	AI research aims to build computer programs that can behave
intelligently and achieve goals in information situations similar to
those faced by human beings.  This {\it information situation} is
characterized by partial knowledge of the facts, by uncertainty about
what is relevant, and by only partial information about the laws
governing the effects of the program's actions.  Making a program that can acquire
information by communication or observation requires understanding
what information is and how it can be represented in the memory of a
computer and subsequently used.  It requires reasoning at least to the
extent of allowing the use of information that was not specifically
encoded for what subsequently turn out to be the goals of the system.
This subject is what the philosophers call epistemology.  However, it
requires an attitude philosophers have only recently begun to take ---
what Daniel Dennett calls the ``design stance.''

	The design stance involves asking what kinds of information
can be obtained and how it can be used to achieve goals.  It is
natural to AI, but philosophers have rarely recognized it as a
possibility.  They have preferred a ``naturalistic stance'' in which
one asks what human knowledge seeking is like.  This often leads them
to propose mechanisms for seeking knowledge that won't work or, even
worse, to propose what they imagine to be mechanisms that aren't even
well defined.

	A big part of philosophy, perhaps most of what has
been written by philosophers, is concerned with getting a
clear general view of the world and of the role of intelligence
in understanding it.  Little of this writing is based on what
the philosopher Daniel Dennett calls {\it the design stance}, i.e.
looking at the world from the point of view of designing
a computer system that will acquire information about the
world and use it to achieve goals that are set for it.  The
designer of an intelligent system needs to build into it a
general view of the aspect of the world with which it must
deal and which can serve as a framework into which particular
facts can be fitted.  Formulating such a view  is a problem
for which AI research might look to philosophy for help.  However, AI
research seems to me to lead to considering somewhat different issues
than have heretofore concerned philosophers and {\it perhaps} to a
somewhat different point of view concerning the issues AI and
philosophy have in common.

	I used {\it perhaps} in the previous paragraph, because
I'm not sure to what extent my different point of view from
most philosophers is motivated by AI and to what extent it
would be different even if I didn't work on AI.

Yet another start

	This lecture concerns the aspects of artificial intelligence
on which I have worked with emphasis on their intellectual background
and the relevant philosophy of scientific and engineering research.

Hixon

Turing had the right idea, but I didn't know about that.

biology vs. computer science

Calling it AI was the right decision.  When the results are
less than human intelligence, the name draws critical fire,
but we can take that.  Much more important, it keeps at least
some of the attention of the workers in the field on the
long term goals, and provides a basis for evaluating work.
When IBM Research declared in 1959, apparently for reasons
of public relations, that they were just doing
symbolic computation and not AI work, their progress quickly
petered out, and they made little contribution to AI thereafter.

	The present company suggests an analogy with linguistics.
Most (maybe all) work in linguistics regards it as the study of a
biological and psychological phenomenon --- communication among
human beings.  One studies, by the methods of science, observation and
experiment, how this communication is performed.  However, another
approach is possible.  Suppose we consider a collection of intelligent
systems, each with certain ways of obtaining knowledge about the
world, and which must communicate with each other in order to achieve
their goals.  They have not had the same experience, and yet they need
ways of referring to the world that are mutually comprehensible (whatever
that means).  Achieving successful communication puts constraints
on the languages they use.  A designer of a language for their
communication would have to understand these constraints.

	In fact the engineering problem of designing suitable
languages exists in achieving flexible electronic data interchange.
As long as the subjects of communication are sufficiently restricted
in advance, it isn't hard to devise suitable conventions.  However,
devising a general common business communication language is a far
from solved problem.  I address it in (McCarthy 1983), and it has a
lot in common with the problem of devising an internal language for
the use of AI systems.

***
I avoided my seniors, rather than either following them or opposing
them.  I suspect this would not be good advice in general.  However,
it seems to have turned out well in artificial intelligence in that
the ideas related to AI proposed by most of them --- von Neumann,
Wiener and McCulloch --- have not so far been fruitful.  In my opinion,
only Turing proposed the line of research that has given us the
results in AI that have been achieved so far.  Of course, the older
proposals have not been refuted, and it is conceivable that someone
will show that we are on the wrong track, and the older ideas are
correct after all.  Connectionism may be regarded as one such gamble.

***
My interest in making intelligent machines began after I attended some
sessions of the Hixon Symposium on Cerebral Mechanisms in Behavior
held in September 1948 at the California Institute of Technology where
I was a beginning graduate student in mathematics.  I have always
supposed that the subject was discussed at the symposium in some form.
However, when I recently looked at the Proceedings, I found no trace
of such a discussion.  Analogies between electronic computing devices
and the brain were discussed, but the problem of making the electronic
devices behave intelligently was not.

common sense physics

history and autobiography
landmark ideas and accomplishments
opinions

Here's this latest argument
triage of the papers when the coffee spills on the table.

	These issues are still alive in some quarters.  Consider the
following problem.  Imagine that we have to program a robot of human
shape and size and with human-sized limbs.  We would like to add to
this robot understanding of the behavior of flowing liquids so that
it will behave appropriately.  The example of appropriate behavior
we have in mind is that the robot may be sitting at a table with
people discussing some papers when someone spills a cup of coffee.
Some of the papers will be threatened with getting wet, and the
robot should do its part in minimizing the damage.

laws of physics, Navier-Stokes equations.
\smallskip\centerline{Copyright \copyright\ \number\year\ by John McCarthy}
\smallskip\noindent{This draft of KYOTO[E88,JMC] TEXed on \jmcdate\ at \theTime}
\vfill\eject\end
	It is the foundation's position that the significance of the
Commemorative Lecture Meeting is to provide an opportunity for
students and the general audience to come into direct contact with the
profound personalities of the laureates, who are among the
intellectual leaders of the world, and especially to provide the
younger generation, who bear the responsibility for the future, with
enlightenment through the many suggestions and spurs offered in the
speeches.  Accordingly, we would appreciate it if you could touch as
much as possible on anything you would like to pass on to coming
generations concerning your view of life and your beliefs, drawing on
your life so far: your early days, the turning points of your life,
episodes of invention, discovery and creativity, your research outlook
and attitude toward science, the ideal relation of master and pupil,
and so on.